Results 1 - 14 of 14
1.
Turk J Phys Med Rehabil ; 69(1): 116-120, 2023 Mar.
Article in English | MEDLINE | ID: covidwho-20231942

ABSTRACT

Local glucocorticoid injections are used to treat isolated sacroiliitis in patients with spondyloarthritis. Sacroiliac joint injections can be performed intraarticularly or periarticularly. Since the accuracy of blind injections is low, fluoroscopy, magnetic resonance imaging, computed tomography, or ultrasonography guidance is used to increase the accuracy of sacroiliac joint injections. Image fusion software, which adds three-dimensional anatomic information to ultrasonography, is now used successfully in sacroiliac joint interventions. Herein, we present two cases of sacroiliac joint corticosteroid injection performed under ultrasonography-magnetic resonance imaging fusion guidance.

2.
2022 International Interdisciplinary Conference on Mathematics, Engineering and Science, MESIICON 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2315142

ABSTRACT

The rapid worldwide spread of coronavirus disease (SARS-CoV-2) has shaken every part of the globe and significantly disrupted health support systems in many countries. Given the existing difficulties and disagreements over testing for the disease, an advanced, low-cost technique is required to classify it. To this end, supervised machine learning (ML) combined with image processing has emerged as a strong technique for detecting coronavirus from human chest X-rays. In this work, different methodologies for identifying coronavirus (SARS-CoV-2) are discussed. A fully automatic detection system is essential to limit transmission of the virus through contact. Various deep learning architectures are available to detect the SARS-CoV-2 virus, such as ResNet50, Inception-ResNet-v2, AlexNet, and VGG19. A dataset of 10,040 samples was used, containing 2,143 SARS-CoV-2, 3,674 pneumonia, and 4,223 normal images. The model designed by fusing a neural network with the HOG transform achieved an accuracy of 98.81% and a sensitivity of 98.65%. © 2022 IEEE.
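The HOG descriptor fused with neural features in this entry reduces, per image cell, to a histogram of gradient orientations weighted by gradient magnitude. A minimal illustrative sketch (simplified: one cell, central differences, no block normalization; not the paper's implementation):

```python
import math

def hog_cell_histogram(cell, n_bins=9):
    """Orientation histogram for one cell of a grayscale image.

    `cell` is a 2D list of pixel intensities. Gradients are taken with
    simple central differences; each interior pixel votes its gradient
    magnitude into an unsigned-orientation bin (0-180 degrees).
    """
    h, w = len(cell), len(cell[0])
    hist = [0.0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]
            gy = cell[y + 1][x] - cell[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(ang / (180.0 / n_bins)) % n_bins] += mag
    return hist

# A vertical edge: all gradient energy lands in the 0-degree bin.
cell = [[0, 0, 10, 10]] * 4
print(hog_cell_histogram(cell))
```

In a full pipeline, such per-cell histograms are concatenated across the image and then joined with CNN features before classification.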

3.
12th International Conference on Electrical and Computer Engineering, ICECE 2022 ; : 112-115, 2022.
Article in English | Scopus | ID: covidwho-2292098

ABSTRACT

Coronavirus disease (COVID-19) is an infectious disease caused by the SARS-CoV-2 virus. Early diagnosis is the only proactive way to prevent avoidable deaths, and machine vision-based diagnosis systems show unparalleled success with high accuracy and low false-diagnosis rates. Working with the proposed method, this research found that Computed Tomography (CT) provides the most satisfactory outcomes across all performance metrics. The proposed method uses a feature hybridization technique that concatenates textural features with neural features. The literature suggests that medical experts recommend chest CT for COVID-19 diagnosis over chest X-ray and RT-PCR, as chest CT is more effective owing to its low false-negative rate. Moreover, the proposed method uses a segmentation technique to isolate the potential region of interest and obtain accurate features. It was compared with several CNN classifiers, such as VGG-16, AlexNet, VGG-19, and ResNet50, as well as a model trained from scratch; VGG-19 delivered the most satisfactory performance and was used in this study. The proposed machine learning-based fusion technique achieves superior performance in classifying COVID-19 positive or negative cases, with an accuracy of 98.63%, specificity of 99.08%, and sensitivity of 98.18%. © 2022 IEEE.

4.
Information Processing and Management ; 60(4), 2023.
Article in English | Scopus | ID: covidwho-2306369

ABSTRACT

To improve multimodal negative sentiment recognition of online public opinion on public health emergencies, we constructed a novel multimodal fine-grained negative sentiment recognition model based on graph convolutional networks (GCN) and ensemble learning. The model comprises BERT- and ViT-based multimodal feature representation, GCN-based feature fusion, multiple classifiers, and ensemble learning-based decision fusion. First, image-text data about COVID-19 are collected from Sina Weibo, and text and image features are extracted with BERT and ViT, respectively. Second, image-text fused features are generated through the GCN over the constructed microblog graph. Finally, AdaBoost is trained to decide the final sentiment from the best classifiers on the image, text, and image-text fused features. The results show that the model's F1-score is 84.13% for sentiment polarity recognition and 82.06% for fine-grained negative sentiment recognition, improvements of 4.13% and 7.55%, respectively, over the best image-text feature fusion baseline. © 2023 Elsevier Ltd
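The decision-fusion stage described in this entry can be pictured as a weighted vote across per-modality classifiers. A minimal sketch, with hypothetical modality names and AdaBoost-style weights derived from each classifier's error rate (the paper's actual ensemble is learned, not hand-set):

```python
import math

def alpha(error):
    """AdaBoost-style classifier weight: 0.5 * ln((1 - e) / e)."""
    return 0.5 * math.log((1.0 - error) / error)

def fuse_decisions(predictions, weights):
    """Late decision fusion: weighted vote over per-modality labels.

    `predictions` maps modality name -> predicted label; `weights` maps
    modality name -> classifier weight. The label with the largest total
    weight wins.
    """
    scores = {}
    for modality, label in predictions.items():
        scores[label] = scores.get(label, 0.0) + weights[modality]
    return max(scores, key=scores.get)

# Text and image-text classifiers are stronger (lower error), so their
# joint vote outweighs the image-only classifier.
weights = {"image": alpha(0.40), "text": alpha(0.20), "fused": alpha(0.15)}
preds = {"image": "neutral", "text": "negative", "fused": "negative"}
print(fuse_decisions(preds, weights))  # negative
```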

5.
Evolving Systems ; 2023.
Article in English | Scopus | ID: covidwho-2269831

ABSTRACT

The lungs of patients with COVID-19 exhibit distinctive lesion features in chest CT images. Fast, accurate segmentation of lesion sites from lung CT images is important for diagnosing and monitoring COVID-19 patients. To this end, we propose a progressive dense residual fusion network, PDRF-Net, for COVID-19 lung CT segmentation. Dense skip connections are introduced to capture multi-level contextual information and compensate for feature loss during network delivery. An efficient aggregated residual module is designed for the encoding-decoding structure; it combines a vision transformer with the residual block, enabling the network to extract richer, finer-detail features from CT images. Furthermore, we introduce a bilateral channel pixel-weighted module to progressively fuse the feature maps obtained from multiple branches. The proposed PDRF-Net obtains good segmentation results on two COVID-19 datasets, surpassing the baseline by 11.6% and 11.1%, respectively, and outperforming other mainstream methods. Thus, PDRF-Net serves as an easy-to-train, high-performance deep learning model for effective segmentation of COVID-19 lung CT images. © 2023, The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.
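Segmentation results like those reported here are conventionally scored with the Dice similarity coefficient. A minimal sketch of the metric on flat binary masks (illustrative only; the paper's exact evaluation protocol is not specified):

```python
def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks (flat lists of 0/1).

    Dice = 2*|P intersect T| / (|P| + |T|); 1.0 means perfect overlap,
    0.0 means no overlap. Empty-vs-empty is defined as 1.0.
    """
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

pred  = [1, 1, 1, 0, 0, 1]
truth = [1, 1, 0, 0, 1, 1]
print(dice_coefficient(pred, truth))  # 0.75
```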

6.
IEEE Access ; 11:16621-16630, 2023.
Article in English | Scopus | ID: covidwho-2281059

ABSTRACT

Medical image segmentation is a crucial aid for doctors in the accurate diagnosis of disease. However, segmentation accuracy needs further improvement because many medical images are noisy and background and target regions are highly similar. Current mainstream segmentation networks, such as TransUnet, achieve accurate results, but their encoders do not consider local connections between adjacent patches, and their decoders lack inter-channel information interaction during upsampling. To address these problems, this paper proposes a dual-encoder image segmentation network with HarDNet68 and Transformer branches that extracts both local and global feature information from the input image, allowing the network to learn more image information and thus improving the effectiveness and accuracy of medical segmentation. To fuse image features of different dimensions in the encoding and decoding stages, we propose a feature adaptation fusion module that fuses the channel information of multi-level features and realizes inter-channel information interaction, further improving segmentation accuracy. Experimental results on the CVC-ClinicDB, ETIS-Larib, and COVID-19 CT datasets show that the proposed model performs better on four evaluation metrics (Dice, IoU, Precision, and Sensitivity) and achieves better segmentation in both internal filling and edge prediction of medical images. Accurate medical image segmentation can help doctors diagnose cancerous regions early, ensure that cancer patients receive timely targeted treatment, and improve their quality of life. © 2013 IEEE.
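The channel-fusion idea in this entry (blending corresponding channels from a local CNN branch and a global Transformer branch) can be caricatured with fixed activation-based weights. This is a deliberately crude stand-in for the paper's learned feature adaptation fusion module; all names are hypothetical:

```python
def fuse_channels(branch_a, branch_b):
    """Toy channel-wise adaptive fusion of two feature maps.

    Each branch is a list of channels (flat lists of activations). Every
    channel gets a weight proportional to its mean absolute activation,
    and corresponding channels from the two branches are blended by the
    normalized weights -- a crude stand-in for learned channel attention.
    """
    fused = []
    for ca, cb in zip(branch_a, branch_b):
        wa = sum(abs(v) for v in ca) / len(ca)
        wb = sum(abs(v) for v in cb) / len(cb)
        total = (wa + wb) or 1.0  # avoid division by zero on dead channels
        wa, wb = wa / total, wb / total
        fused.append([wa * a + wb * b for a, b in zip(ca, cb)])
    return fused

local_feats  = [[2.0, 2.0], [0.0, 0.0]]   # e.g. HarDNet68 branch
global_feats = [[2.0, 2.0], [4.0, 4.0]]   # e.g. Transformer branch
print(fuse_channels(local_feats, global_feats))
```

Note how the second channel, dead in the local branch, is taken entirely from the global branch, while equally active channels are averaged.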

7.
25th International Conference on Computer and Information Technology, ICCIT 2022 ; : 903-908, 2022.
Article in English | Scopus | ID: covidwho-2248579

ABSTRACT

The COVID-19 beta coronavirus, commonly known as severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), is currently one of the most significant RNA viruses affecting human health, and earlier epidemics of this kind spread because outbreaks were not contained. Much research has recently been carried out on classifying the disease, yet no automated diagnostic tool has been developed to identify multiple diseases from X-ray, Computed Tomography (CT), or Magnetic Resonance Imaging (MRI) images. In this research, several state-of-the-art techniques were applied to chest X-ray, CT-scan, and MRI segmented-image datasets and trained simultaneously. Deep learning models based on VGG16, VGG19, InceptionV3, ResNet50, Capsule Network, DenseNet, Xception, and an Optimized Convolutional Neural Network (Optimized CNN) were applied to detect COVID-19 infection, Alzheimer's disease, and infected lung tissue. Efforts to reduce model loss and overfitting improved the models' accuracy. The training dataset was further enlarged with image augmentation techniques such as flip-up, flip-down, flip-left, and flip-right. In addition, we propose a mobile application that integrates a deep learning model to speed up diagnosis. Finally, we applied image fusion to analyze the medical images, extracting meaningful insights from the multimodal imaging modalities. © 2022 IEEE.
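The flip augmentations named in this entry are simple array reversals. A minimal sketch on a 2D list "image" (illustrative; real pipelines operate on tensors):

```python
def flip_up_down(img):
    """Vertical flip: reverse the row order."""
    return img[::-1]

def flip_left_right(img):
    """Horizontal flip: reverse each row."""
    return [row[::-1] for row in img]

def augment(img):
    """Return the original image plus its three flipped variants,
    quadrupling the effective training-set size."""
    return [img,
            flip_up_down(img),
            flip_left_right(img),
            flip_up_down(flip_left_right(img))]

img = [[1, 2],
       [3, 4]]
for variant in augment(img):
    print(variant)
```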

8.
45th European Conference on Information Retrieval, ECIR 2023 ; 13982 LNCS:557-567, 2023.
Article in English | Scopus | ID: covidwho-2263971

ABSTRACT

In this paper, we provide an overview of the upcoming ImageCLEF campaign. ImageCLEF has been part of CLEF, the Conference and Labs of the Evaluation Forum, since 2003. ImageCLEF, the multimedia retrieval task in CLEF, is an ongoing evaluation initiative that promotes the evaluation of technologies for annotation, indexing, and retrieval of multimodal data, with the aim of providing information access to large data collections in various usage scenarios and domains. In its 21st edition, ImageCLEF 2023 will have four main tasks: (i) a Medical task addressing automatic image captioning, synthetic medical images created with GANs, visual question answering for colonoscopy images, and medical dialogue summarization; (ii) an Aware task addressing the prediction of real-life consequences of online photo sharing; (iii) a Fusion task addressing late fusion techniques based on the expertise of a pool of classifiers; and (iv) a Recommending task addressing cultural heritage content recommendation. In 2022, ImageCLEF saw over 25 participating groups submit more than 258 runs, numbers that show the impact of the campaign. With the COVID-19 pandemic now over, we expect interest in participating, especially in the physical CLEF sessions, to increase significantly in 2023. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.

9.
6th International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud), I-SMAC 2022 ; : 916-925, 2022.
Article in English | Scopus | ID: covidwho-2213193

ABSTRACT

Early in 2020, the global spread of Coronavirus Disease 2019 (COVID-19) triggered an existential health crisis. Automated lung infection diagnosis from Computed Tomography (CT) images could significantly improve the current healthcare approach to combating COVID-19, but segmenting infected regions from CT slices is difficult due to the wide variety of infection traits and the weak contrast between infected and healthy tissue. Additionally, gathering large amounts of data quickly is impractical, which hinders the training of a deep model. This study proposes COVID-SegNet, a convolutional deep learning technique for automatically segmenting COVID-19 infection areas and the whole lungs from chest CT images. The proposed deep CNN includes a feature variation (FV) block that adaptively modifies the global properties of the features for segmenting COVID-19 infection, improving its capacity to express features efficiently and adaptively in various situations. To handle the complex shape variations of COVID-19 infection zones, we additionally recommend PASPP, a progressive atrous spatial pyramid pooling module. After a simple convolution module, PASPP generates the final features using multistage parallel fusion branches, covering a variety of receptive fields with atrous filters at a suitable dilation rate in each atrous convolutional layer. For the segmentation of COVID-19 and the lungs, the dice similarity coefficients are 0.987 and 0.726, respectively. Experiments carried out on data gathered at the scan centre demonstrate that the method performs well. © 2022 IEEE.
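The atrous (dilated) filters behind PASPP space their taps `dilation` samples apart, widening the receptive field without adding parameters. A minimal 1-D sketch (illustrative; the paper uses 2-D/3-D convolutions with learned kernels):

```python
def atrous_conv1d(signal, kernel, dilation):
    """1-D atrous (dilated) convolution with valid padding.

    Kernel taps are spaced `dilation` samples apart, so the same 3-tap
    kernel covers a span of 1 + (len(kernel) - 1) * dilation samples --
    the mechanism PASPP's parallel branches use to mix receptive fields.
    """
    span = (len(kernel) - 1) * dilation
    out = []
    for i in range(len(signal) - span):
        out.append(sum(kernel[k] * signal[i + k * dilation]
                       for k in range(len(kernel))))
    return out

sig = [1, 2, 3, 4, 5, 6]
print(atrous_conv1d(sig, [1, 1, 1], dilation=1))  # [6, 9, 12, 15]
print(atrous_conv1d(sig, [1, 1, 1], dilation=2))  # [9, 12]
```

A pyramid-pooling module would run several such branches with different dilation rates in parallel and fuse their outputs.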

10.
4th International Conference on Communications, Information System and Computer Engineering, CISCE 2022 ; : 605-608, 2022.
Article in English | Scopus | ID: covidwho-2018629

ABSTRACT

The pneumonia epidemic caused by the 2019 novel coronavirus (2019-nCoV) has affected many aspects of people's lives and aroused widespread concern in global public opinion. To better grasp the real state of public opinion on the Internet and support epidemic prevention and public opinion analysis, this paper studies netizen sentiment analysis on epidemic-related topics in online communities and proposes a multimodal feature fusion solution. For fusing the image and text modalities, Bi-LSTM and Bi-GRU are used to further learn the intrinsic correlation between modalities on top of bidirectional transformer feature fusion, and an image-based multi-scale feature fusion method is proposed that better addresses this task. Experiments show that the proposed method outperforms current mainstream multimodal sentiment analysis methods. © 2022 IEEE.

11.
2021 IEEE International Conference on Technology, Research, and Innovation for Betterment of Society, TRIBES 2021 ; 2021.
Article in English | Scopus | ID: covidwho-1831873

ABSTRACT

In December 2019, the new disease COVID-19 was first identified in Wuhan, China, and rapidly spread across the whole world, impacting everyone's health, the global economy, and people's daily lives. Detecting all positive cases quickly became crucial for saving lives, but the shortage of doctors and the limited availability of test kits made this an arduous task. Recent research shows that radiological imaging techniques play a valuable role in detecting COVID-19: artificial intelligence applied to radiological images can identify the disease very accurately and can help overcome the shortage of doctors, even in remote areas. This study proposes a method based on aggregating extracted hand-crafted features with automatically learned ones. We used a histogram of oriented gradients (HOG) for hand-crafted feature extraction. In addition, several networks were investigated for deep learned features, including DenseNet201, Inception-ResNetV2, VGG16, VGG19, InceptionV3, ResNet50, MobileNetV2, and Xception, of which VGG19 gave optimal performance. Furthermore, principal component analysis (PCA) was used for dimensionality reduction and to maintain the consistency of features. Our experiments on COVID-19 image datasets revealed that the proposed method achieves 99% classification accuracy in distinguishing normal from pneumonia X-ray images. © 2021 IEEE.
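PCA, used here for dimensionality reduction, projects features onto the directions of maximum variance. A minimal sketch that finds the first principal component by power iteration on the covariance matrix (illustrative, pure Python; real pipelines use a library routine):

```python
def principal_component(data, iters=100):
    """First principal component of `data` via power iteration.

    `data` is a list of feature vectors. The data is centered, the
    covariance matrix is built, and v <- Cv / ||Cv|| is iterated until
    v points along the direction of maximum variance.
    """
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(r[i] * r[j] for r in centered) / (n - 1)
            for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Points spread along the line y = x: the component is close to
# (1/sqrt(2), 1/sqrt(2)) ~ (0.707, 0.707).
pts = [[0, 0], [1, 1], [2, 2], [3, 3.1]]
print(principal_component(pts))
```

Projecting each feature vector onto the leading components keeps most of the variance while shrinking the feature dimension fed to the classifier.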

12.
Front Comput Neurosci ; 15: 803724, 2021.
Article in English | MEDLINE | ID: covidwho-1715022

ABSTRACT

Medical image fusion has indispensable value in the medical field. Taking advantage of structure-preserving filters and deep learning, a structure-preservation-based two-scale multimodal medical image fusion algorithm is proposed. First, a two-scale decomposition method decomposes the source images into base-layer and detail-layer components. Second, a fusion method based on the iterative joint bilateral filter fuses the base-layer components. Third, a convolutional neural network and local image similarity fuse the detail-layer components. Finally, the fused result is obtained by two-scale image reconstruction. Comparative experiments show that our algorithm produces better fusion results than state-of-the-art medical image fusion algorithms.
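The two-scale decompose/fuse/reconstruct pipeline can be sketched on 1-D "images". This is a heavily simplified stand-in: a box blur replaces the bilateral filter, averaging replaces the base-layer rule, and max-magnitude selection replaces the CNN detail rule:

```python
def box_blur(signal, radius=1):
    """Base layer: moving-average smoothing (window clamped at edges)."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def two_scale_fuse(a, b):
    """Two-scale fusion of two 1-D signals.

    Each signal splits into a base layer (blur) and a detail layer
    (signal minus base). Bases are averaged, details are fused by
    picking the larger-magnitude coefficient, and the layers are
    recombined into the fused signal.
    """
    base_a, base_b = box_blur(a), box_blur(b)
    det_a = [x - y for x, y in zip(a, base_a)]
    det_b = [x - y for x, y in zip(b, base_b)]
    base = [(x + y) / 2 for x, y in zip(base_a, base_b)]
    det = [x if abs(x) >= abs(y) else y for x, y in zip(det_a, det_b)]
    return [bb + dd for bb, dd in zip(base, det)]

a = [0, 0, 8, 0, 0]   # sharp detail present only in modality A
b = [2, 2, 2, 2, 2]   # flat modality B
print(two_scale_fuse(a, b))
```

The fused output keeps the sharp spike contributed by A while its smooth level reflects both inputs, which is the point of separating base and detail layers.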

13.
12th Indian Conference on Computer Vision, Graphics and Image Processing, ICVGIP 2021 ; 2021.
Article in English | Scopus | ID: covidwho-1605834

ABSTRACT

One of the main challenges in controlling the spread of the COVID-19 pandemic is diagnosing infection early. The most reliable method, RT-PCR, takes several hours to give results. Although the antibody (serological) test gives results within a few hours, it is not accurate enough to be reliable, and both methods are invasive. Another issue is that the number of labs performing these tests is very limited. It would therefore be beneficial to use already existing clinical infrastructure to diagnose COVID-19 accurately in real time. Recently, researchers have used chest CT images to diagnose COVID-19 with impressive accuracy, and the state-of-the-art method for detecting COVID-19 from chest CT images involves deep learning. Deep learning is expected to provide accurate and reliable results only when the model is trained on a large dataset; because no large dataset is available, existing models have been trained on smaller ones. To achieve accuracy along with reliability, we propose a COVID-19 detection model that combines a deep learning model with a traditional machine learning model. The novelty of the proposed model is the fusion of image quality and deep learning. The proposed method outperformed the state of the art in accuracy, recall, and F1 score (more than 99% on almost all metrics) on a benchmark dataset. The efficacy of the selected features and the explainability of the method are demonstrated through various tests. © 2021 ACM.

14.
PeerJ Comput Sci ; 7: e364, 2021.
Article in English | MEDLINE | ID: covidwho-1079813

ABSTRACT

BACKGROUND AND PURPOSE: COVID-19 is a new viral strain that has disrupted life worldwide. The new coronavirus is spreading rapidly across the world and poses a threat to people's health. Experimental medical tests and analyses have shown that lung infection occurs in almost all COVID-19 patients. Although chest Computed Tomography is a useful imaging method for diagnosing lung-related diseases, chest X-ray (CXR) is more widely available, mainly due to its lower cost and faster results. Deep learning (DL), one of the most popular artificial intelligence techniques, is an effective way to help doctors analyze the large numbers of CXR images that are crucial to performance. MATERIALS AND METHODS: In this article, we propose a novel perceptual two-layer image fusion method using DL to obtain more informative CXR images for a COVID-19 dataset. To assess the proposed algorithm's performance, the dataset used for this work includes 87 CXR images acquired from 25 cases, all confirmed with COVID-19. Dataset preprocessing is needed to facilitate the role of convolutional neural networks (CNN); thus, a hybrid decomposition and fusion of the Nonsubsampled Contourlet Transform (NSCT) and CNN_VGG19 as a feature extractor was used. RESULTS: Our experimental results show that fused images for the imbalanced COVID-19 dataset can be reliably generated by the algorithm established here. Compared to the original COVID-19 dataset, the fused images have more features and characteristics. Six metrics are applied to evaluate the medical image fusion (MIF): QAB/F, QMI, PSNR, SSIM, SF, and STD. On QMI, PSNR, and SSIM, the proposed NSCT + CNN_VGG19 algorithm achieves the highest values, and its fused images retain the most feature characteristics. We can deduce that the proposed fusion algorithm is efficient enough to generate CXR COVID-19 images that are more useful for examiners exploring patient status. CONCLUSIONS: A novel image fusion algorithm using DL for an imbalanced COVID-19 dataset is the crucial contribution of this work. Extensive experimental results show that the proposed NSCT + CNN_VGG19 algorithm outperforms competitive image fusion algorithms.
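Among the fusion-quality metrics listed in this entry, PSNR has a simple closed form. A minimal sketch on flat pixel lists (illustrative; the paper's evaluation code is not shown):

```python
import math

def psnr(reference, fused, peak=255.0):
    """Peak signal-to-noise ratio between two images (flat pixel lists).

    PSNR = 10 * log10(peak^2 / MSE); higher means the fused image is
    closer to the reference. Identical images give infinity.
    """
    mse = sum((r - f) ** 2 for r, f in zip(reference, fused)) / len(reference)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(peak ** 2 / mse)

ref   = [100, 110, 120, 130]
fused = [100, 112, 118, 130]
print(round(psnr(ref, fused), 2))  # 45.12
```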
